We present a generative method to estimate 3D human motion and body shape from monocular video. Under the assumption that, starting from an initial pose, optical flow constrains subsequent human motion, we exploit flow to find temporally coherent human poses over a motion sequence. We estimate human motion by minimizing the difference between computed flow fields and the output of an artificial flow renderer. A single initialization step is required to estimate motion over multiple frames. Several regularization functions enhance robustness over time. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video.
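The objective implied by the abstract can be sketched as follows; the notation here is our own assumption, not the paper's:

$$
E(\theta_{1:T}) \;=\; \sum_{t=1}^{T-1} \big\| F_t^{\mathrm{obs}} - \mathcal{R}(\theta_t, \theta_{t+1}) \big\|^2 \;+\; \lambda \sum_{t=1}^{T} \psi(\theta_t),
$$

where $\theta_t$ denotes the pose and shape parameters at frame $t$, $F_t^{\mathrm{obs}}$ is the optical flow computed between frames $t$ and $t+1$, $\mathcal{R}$ is the artificial flow renderer producing the flow induced by the hypothesized motion, and $\psi$ collects the regularization terms that enhance robustness over time, weighted by $\lambda$.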